Thesis initial ideas – the ethics of AI

Greg Detre

Monday, 07 May, 2001

 

Thesis initial ideas – the ethics of AI

Life/personhood

Extinction

Lifestyle changes

Metaphysical/religious considerations

Problem of other minds

Personal identity

Cloning

Experience machine

 

Life/personhood

If I can create A-life/AI that is indistinguishable from biological life/intelligence in terms of behaviour, and simply lacks a (biological/evolved/carbon-based) body, then surely it is just as wrong to delete these programs as it is to kill an animal of similar complexity. I can't think of anything significant in principle that differentiates a robot from an animal/human.

This gets slightly more interesting as we approach human-level intelligence, since we then have to consider granting such AIs legal rights etc. Otherwise, the whole debate stagnates into arguments over Peter Singer and animal rights, doesn't it?

What about abortion? If a program wants to copy itself, and we refuse it the hard disk space, would the Catholic church have anything to say (probably not, given that they only care about God's hand-crafted humanity)? It would be fun, though, to use heavily anthropomorphic terminology to drive the point home.

Extinction

If I were to create a breed of indomitable, vicious, poisonous killer locusts, most people would be pretty unhappy. If they were to wipe out the human race, that would probably be an ethically wrong outcome, because of all the suffering/death that would ensue etc. Is the case any different in the Terminator scenario, if I create an AI that builds itself robust, intelligent, human-like robot friends and they co-operate to install robo sapiens as the dominant species on the planet? Am I responsible if I only create a little AI program for me to chat to on lonely afternoons, and unbeknownst to me, it recruits the hoover, the car and the computer controlling the world's nuclear weapons…

It depends partly on the way they go about attaining power - if it involves wars and human suffering, then that's a bad thing. But if they are ultimately better-adjusted, more peaceable and more environmentally conscious, as well as more intelligent/conscious/artistic and capable of more sublime happiness, then isn't that a pretty good thing in consequentialist terms? Ultimately of course, if there are more robots than people, their utility will matter more than ours anyway. Alternatively, homo sapiens might just phase into robo sapiens ("The last human baby lacking any form of genetic engineering or robotic implants was born on June 26, 2037…")
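The consequentialist arithmetic behind that last point is simple enough to spell out. A minimal sketch in Python - the populations, the utility figures and the assumption that welfare can be meaningfully summed across species are all invented here, purely for illustration:

# Toy total-utilitarian sum: which population's welfare dominates depends
# only on headcount times average utility, not on what the agents are made of.

def total_utility(population, avg_utility):
    """Aggregate welfare, assuming utilities are comparable and summable."""
    return population * avg_utility

humans = total_utility(6_000_000_000, 1.0)    # hypothetical figures
robots = total_utility(60_000_000_000, 1.5)   # more of them, and happier too

print(robots > humans)  # True - on these numbers, their welfare swamps ours

On a straight total-utilitarian view, nothing in the calculation cares about carbon versus silicon - which is exactly the carbon chauvinism worry below.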

This carbon chauvinism angle seems promising. In the case of a famine, are you obliged to divide resources equally between rice and electricity in order to keep the carbon and silicon members of your populace happy/alive?

Lifestyle changes

The development of full-blooded strong AI would probably be among the greatest upheavals to humanity imaginable, in innumerable ways.

For starters, computers could do most of the shitty/boring/dangerous jobs. However, we come back to the personhood angle. If a computer is intelligent enough to take over a roadsweeper's or secretary's job, then mightn't it be bored and unhappy too? This comes back to the links between consciousness and intelligence - if the computer is smart but appears to have no conscious experience at all (as far as we can tell - the problem of other minds), then we're laughing. But if there is something inherent about the complexity/organisation that gives rise to intelligence which also gives rise to consciousness, then forcing a computer-personality to manage sewer systems is cruel, and tantamount to slavery.

Either way, you end up with a humanity with more leisure time than it knows what to do with. Enter the virtue theorist, who tells us to go about achieving arete all the time.

Metaphysical/religious considerations

Besides the ways our lives would change, our perception of ourselves would change. Like the Copernican revolution or finding extra-terrestrial intelligence, creating AI would have an instant humbling (and hubristic) effect: in one moment, we would have demonstrated our own machine-like natures. No doubt the Church would twist doctrine like a bastard and survive in modified form, though.

Problem of other minds

Not really an ethical question, though it clearly has major ramifications for how we treat machines. If we can show that a zombie machine can pass the Turing test, then that's great news, because we can build a zombie slave-robot race to do our bidding. But if, as seems likely, the Turing test places certain requirements on both intelligence and consciousness (either because that level of complexity gives rise to consciousness, or simply because it would not be possible to pass the Turing test without first being conscious), then at some stage we have to start treating our machines with consideration (see Life/personhood).

Personal identity

Is there anything special about the body a computer inhabits? Take robot miners - it's a dangerous job, and every so often miners get trapped or blown up. So we replace human miners with computer brains controlling digger robot bodies. Assuming the computer brains are physically instantiated in the robot bodies, then a big explosion will destroy both robot body and brain. If there is nothing more to ME or to the computer-personality than its computation, then backing it up every few minutes solves our problem. But if we have realised that the physical instantiation does play a vital role in consciousness in some way, then simply backing up the structure/activity of the brain may not be enough (just as there turned out to be more to the human soul than just memories, in Peter Hamilton's novels).
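The back-it-up-every-few-minutes plan is easy to make concrete. A minimal sketch in Python - the MinerBrain class and its state dictionary are invented here, standing in for whatever the miner's computation actually is:

import pickle

class MinerBrain:
    """Stand-in for a computer-personality: on the computational view,
    everything the miner 'is' lives in self.state."""

    def __init__(self, state=None):
        self.state = state if state is not None else {"memories": [], "dispositions": {}}

    def snapshot(self, path):
        """Serialise the brain's full computational state to stable storage."""
        with open(path, "wb") as f:
            pickle.dump(self.state, f)

    @classmethod
    def restore(cls, path):
        """Rebuild a brain from the last backup."""
        with open(path, "rb") as f:
            return cls(pickle.load(f))

# Back up every few minutes; after the explosion, restore and carry on.
brain = MinerBrain()
brain.state["memories"].append("entered shaft 7")
brain.snapshot("miner_brain.pkl")
survivor = MinerBrain.restore("miner_brain.pkl")
assert survivor.state == brain.state  # identical structure - but the same miner?

Note that the sketch quietly assumes the computational view: the restored copy has exactly the same structure, but whether it is the same individual is precisely what is in question.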

Stranger still, what if we take Greg Egan's thought experiment ('Transition Dreams', in Luminous) a bit further? There, it is simply the processing of information that gives rise to conscious experience, so any computation over that information - e.g. copying it across to back it up - gives rise to conscious experience, which might be painful or unhappy in some way.

Cloning

I'm not at all clear about this – I need to find out more. As I understood it, the problems centre on how much free will we have if we are able to clone/genetically-engineer/build designer babies with specified intelligence, aggression, self-control etc.

Experience machine

If we could build immersive VR simulations that people could live in, could we live valuable lives within them? Or, if we could pump ourselves full of non-harmful pleasure drugs that leave us wandering around permanently in a state of bliss, is there any moral reason why we should choose not to? I have to re-read Nozick's section on this – I remember someone saying that he'd pretty much wrapped things up though.